Graph neural networks (GNNs) have attracted considerable attention for their unique ability to extend machine learning (ML) methods to applications broadly defined by unstructured data, especially graphs. Compared with other ML modalities, accelerating GNNs is more challenging because of the irregularity and heterogeneity derived from graph data. Existing efforts, however, have focused mainly on handling graphs' irregularity and have not studied their heterogeneity. To this end, we propose H-GCN, a hybrid accelerator built from PL (programmable logic) and AIE (AI Engine) that leverages the emerging heterogeneity of Xilinx Versal Adaptive Compute Acceleration Platforms (ACAPs) to achieve high-performance GNN inference. In particular, H-GCN partitions each graph into three subgraphs according to its inherent heterogeneity and processes them with PL and AIE, respectively. To further improve performance, we explore the sparsity support of AIE and develop an efficient density-aware method to automatically map tiles of sparse matrix-matrix multiplication (SpMM) onto the systolic tensor array. Compared with state-of-the-art GCN accelerators, H-GCN achieves average speedups of 1.1~2.3x.
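To make the density-aware tile mapping concrete, here is a minimal sketch (our illustration, not the paper's implementation; the tile size, threshold, and routing policy are assumptions) of how SpMM tiles could be split by density between a systolic tensor array (AIE) and fine-grained sparse logic (PL):

```python
import numpy as np

def partition_spmm_tiles(A, tile_size=32, density_threshold=0.5):
    """Split a (dense-format) sparse matrix into square tiles and route each
    nonempty tile by density: dense tiles suit the regular dataflow of a
    systolic tensor array (AIE); very sparse tiles suit irregular logic (PL)."""
    aie_tiles, pl_tiles = [], []
    n_rows, n_cols = A.shape
    for i in range(0, n_rows, tile_size):
        for j in range(0, n_cols, tile_size):
            tile = A[i:i + tile_size, j:j + tile_size]
            density = np.count_nonzero(tile) / tile.size
            if density >= density_threshold:
                aie_tiles.append((i, j, tile))  # regular, dense compute
            elif density > 0:
                pl_tiles.append((i, j, tile))   # irregular, sparse compute
    return aie_tiles, pl_tiles
```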
An increasing number of data science and machine learning problems rely on computation with tensors, which better capture the multi-way relationships and interactions of data than matrices. When leveraging this key advantage, a central challenge is to develop computationally efficient algorithms for extracting useful information from tensor data that are simultaneously corrupted and ill-conditioned. This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from observations contaminated by sparse corruptions, under the Tucker decomposition. To minimize the computation and memory footprint, we propose to directly recover the low-dimensional tensor factors, starting from a tailored spectral initialization, via scaled gradient descent (ScaledGD), coupled with an iteratively varying thresholding operation to adaptively remove the effect of corruptions. Theoretically, we establish that the proposed algorithm converges linearly to the true low-rank tensor at a constant rate that is independent of its condition number, as long as the level of corruptions is not too large. Empirically, we demonstrate through synthetic experiments and real-world applications that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
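For intuition, the matrix-case analogue of the described update can be sketched as follows (notation assumed: $Y$ is the observation, $U_t V_t^\top$ the low-rank estimate, $S_t$ the corruption estimate, and $\mathcal{T}_{\zeta}$ an entrywise thresholding operator at level $\zeta$; the paper's actual algorithm operates on Tucker tensor factors):

```latex
\begin{aligned}
S_{t+1} &= \mathcal{T}_{\zeta_{t+1}}\!\left(Y - U_t V_t^\top\right), \\
U_{t+1} &= U_t - \eta \left(U_t V_t^\top + S_{t+1} - Y\right) V_t \left(V_t^\top V_t\right)^{-1}, \\
V_{t+1} &= V_t - \eta \left(U_t V_t^\top + S_{t+1} - Y\right)^{\top} U_t \left(U_t^\top U_t\right)^{-1}.
\end{aligned}
```

The inverse Gram matrices act as preconditioners, which is what makes the convergence rate independent of the condition number.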
The success of Transformers in computer vision has attracted increasing attention from the medical imaging community. For medical image segmentation in particular, many excellent hybrid architectures based on convolutional neural networks (CNNs) and Transformers have been presented and have achieved impressive performance. However, most methods, which embed modular Transformers into CNNs, struggle to reach their full potential. In this paper, we propose a novel hybrid architecture for medical image segmentation called PHTrans, which hybridizes Transformer and CNN in parallel within the main building blocks to produce hierarchical representations from global and local features and adaptively aggregate them, aiming to fully exploit their strengths for better segmentation performance. Specifically, PHTrans follows the U-shaped encoder-decoder design and introduces the parallel hybrid module in the deep stages, where convolution blocks and a modified 3D Swin Transformer learn local features and global dependencies separately, and the dimensions of the outputs are then unified to achieve feature aggregation. Extensive experimental results on the Multi-Atlas Labeling Beyond the Cranial Vault and Automated Cardiac Diagnosis Challenge datasets corroborate its effectiveness, consistently outperforming state-of-the-art methods. The code is available at: https://github.com/lseventeen/phtrans.
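A minimal PyTorch-style sketch of the parallel hybrid idea follows (our 2D simplification with generic multi-head attention; the actual PHTrans block uses 3D convolutions and a modified 3D Swin Transformer):

```python
import torch
import torch.nn as nn

class ParallelHybridBlock(nn.Module):
    """Sketch of a parallel CNN/Transformer block in the spirit of PHTrans:
    a convolution branch captures local features, an attention branch models
    global dependencies, and the outputs are summed for aggregation."""
    def __init__(self, channels, num_heads=4):   # channels divisible by num_heads
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                         # x: (B, C, H, W)
        local = self.conv_branch(x)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)        # volume-to-sequence: (B, H*W, C)
        glob, _ = self.attn(seq, seq, seq)
        glob = self.norm(glob).transpose(1, 2).reshape(b, c, h, w)  # back to volume
        return local + glob                       # aggregate local and global features
```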
Tensors, which provide a powerful and flexible model for representing multi-attribute data and multi-way interactions, play an indispensable role in modern data science across various fields of science and engineering. A fundamental task is to faithfully recover the tensor from highly incomplete measurements in a statistically and computationally efficient manner. Harnessing the low-rank structure of tensors in the Tucker decomposition, this paper develops a scaled gradient descent (ScaledGD) algorithm that directly recovers the tensor factors with tailored spectral initializations, and shows that it converges to the ground-truth tensor at a linear rate independent of the condition number for two canonical problems, tensor completion and tensor regression, as soon as the sample size is above the order of $n^{3/2}$ ignoring other parameter dependencies, where $n$ is the dimension of the tensor. This leads to an extremely scalable approach to low-rank tensor estimation compared with prior art, which suffers from at least one of the following drawbacks: extreme sensitivity to ill-conditioning, high per-iteration costs in terms of memory and computation, or poor sample complexity guarantees. To the best of our knowledge, ScaledGD is the first algorithm to achieve near-optimal statistical and computational complexities simultaneously for low-rank tensor completion with the Tucker decomposition. Our algorithm highlights the power of appropriate preconditioning in accelerating nonconvex statistical estimation, where the iteration-varying preconditioners promote desirable invariance properties of the trajectory with respect to the underlying symmetry in low-rank tensor factorization.
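The core mechanism can be summarized by the generic preconditioned update below (a sketch; in the Tucker setting each factor and the core receive analogous preconditioners built from the remaining factors):

```latex
% F_t: a low-rank factor at iteration t; f: the fitting loss; eta: step size.
F_{t+1} = F_t - \eta \, \nabla_F f(F_t) \left(F_t^\top F_t\right)^{-1}
```

Multiplying the gradient by the inverse Gram matrix $(F_t^\top F_t)^{-1}$ rescales the search direction so that ill-conditioning of the ground truth no longer slows the iterations, while costing little more than vanilla gradient descent.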
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogous to semantic segmentation. Almost all state-of-the-art object detectors, such as RetinaNet, SSD, YOLOv3, and Faster R-CNN, rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor-box free, as well as proposal free. By eliminating the predefined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes, such as calculating overlaps during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With non-maximum suppression (NMS) as the only post-processing, FCOS with ResNeXt-64x4d-101 achieves 44.7% AP with single-model and single-scale testing, surpassing previous one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and more flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: tinyurl.com/FCOSv1
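As a concrete illustration of the per-pixel prediction scheme, the sketch below (helper names are ours; a single ground-truth box for brevity) computes FCOS-style regression targets and the centerness score described in the paper:

```python
import torch

def fcos_regression_targets(points, box):
    """For each location (x, y) inside a ground-truth box (x0, y0, x1, y1),
    the regression target is the distances to the four box sides,
    (l, t, r, b) -- no anchor boxes involved."""
    x, y = points[:, 0], points[:, 1]
    x0, y0, x1, y1 = box
    targets = torch.stack([x - x0, y - y0, x1 - x, y1 - y], dim=1)
    inside = targets.min(dim=1).values > 0     # only locations inside the box
    return targets, inside

def centerness(targets):
    """Down-weights low-quality predictions produced far from a box center."""
    l, t, r, b = targets.unbind(dim=1)
    return torch.sqrt((torch.minimum(l, r) / torch.maximum(l, r)) *
                      (torch.minimum(t, b) / torch.maximum(t, b)))
```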
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a class of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
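Conceptually, a Koopman neural operator lifts the PDE state into a latent space where the dynamics act approximately linearly. The sketch below illustrates that encode-evolve-decode structure in plain PyTorch (it is not the KoopmanLab API, whose actual classes and signatures may differ):

```python
import torch
import torch.nn as nn

class KoopmanSketch(nn.Module):
    """Conceptual sketch of the Koopman-operator idea behind KNO: lift the
    state into a latent space, evolve it with a learned linear operator,
    then decode back to the physical space."""
    def __init__(self, state_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, latent_dim), nn.GELU(),
                                     nn.Linear(latent_dim, latent_dim))
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)  # linear evolution
        self.decoder = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.GELU(),
                                     nn.Linear(latent_dim, state_dim))

    def forward(self, u, steps=1):
        z = self.encoder(u)
        for _ in range(steps):            # long-term prediction: iterate the
            z = self.koopman(z)           # linear operator in latent space
        return self.decoder(z)
```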
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite recent efforts to alleviate this problem with several spontaneous ME datasets, the amount of available data remains small. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class-imbalance and key-frame sequence-sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
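One standard remedy for the class-imbalance problem mentioned above is inverse-frequency sampling; the sketch below is a generic illustration (not necessarily the specific solution explored on DFME):

```python
import torch
from torch.utils.data import WeightedRandomSampler

def balanced_sampler(labels):
    """Sample training items inversely proportional to class frequency, so
    rare micro-expression classes appear as often as common ones."""
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels)
    weights = 1.0 / counts[labels].float()
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
```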
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples part features and things/stuff features, respectively. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ and design a new part-whole cross-attention scheme to further boost part segmentation quality, implementing part-whole interaction with masked cross attention. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
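The masked cross attention used for part-whole interaction can be sketched as follows (a generic Mask2Former-style illustration with assumed tensor shapes, not the exact Panoptic-PartFormer++ module):

```python
import torch
import torch.nn as nn

def masked_cross_attention(queries, features, mask_logits, attn):
    """Each query attends only to pixels inside its currently predicted mask.

    queries:     (B, Q, C) object/part queries
    features:    (B, N, C) flattened pixel features
    mask_logits: (B, Q, N) current mask prediction per query
    attn:        nn.MultiheadAttention created with batch_first=True
    """
    attn_mask = mask_logits.sigmoid() < 0.5                  # True = do not attend
    # Never mask out everything for a query, or attention produces NaNs.
    attn_mask = attn_mask & ~attn_mask.all(dim=-1, keepdim=True)
    attn_mask = attn_mask.repeat_interleave(attn.num_heads, dim=0)  # (B*h, Q, N)
    out, _ = attn(queries, features, features, attn_mask=attn_mask)
    return out

# Example: attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
```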
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
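The formulation described above can be sketched as follows (notation assumed: $\theta_t$ is the human's internal model at time $t$, $a_t$ the robot's action, $o_t$ the resulting observation, $f$ the learning dynamics inferred from demonstrations, and $\theta^{*}$ the true model; the planning objective shown is illustrative, not the paper's exact formulation):

```latex
\theta_{t+1} = f\bigl(\theta_t,\; o_t(a_t)\bigr), \qquad
\max_{a_{1:T}} \; \sum_{t=1}^{T} R_{\mathrm{task}}(s_t, a_t)
  \;-\; \lambda \,\bigl\lVert \theta_{t+1} - \theta^{*} \bigr\rVert
```

Embedding $f$ in the planner is what lets the robot pick actions that both assist with the task and steer the human's model toward reality.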
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
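A generic sketch of what a caption-grounding loss can look like is given below (symmetric InfoNCE over matched mask-noun pairs; the one-to-one matching and the exact CGG loss are assumptions):

```python
import torch
import torch.nn.functional as F

def grounding_loss(mask_feats, noun_embeds, temperature=0.07):
    """Align each caption-noun embedding with its matched mask feature,
    contrastively against all other pairs in the image.

    mask_feats:  (M, C) pooled features of predicted masks
    noun_embeds: (M, C) text embeddings of the matched caption nouns
    """
    mask_feats = F.normalize(mask_feats, dim=-1)
    noun_embeds = F.normalize(noun_embeds, dim=-1)
    logits = mask_feats @ noun_embeds.t() / temperature   # (M, M) similarities
    target = torch.arange(len(mask_feats), device=logits.device)
    # Symmetric InfoNCE: masks -> nouns and nouns -> masks.
    return 0.5 * (F.cross_entropy(logits, target) +
                  F.cross_entropy(logits.t(), target))
```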